OPASS: an Online Portfolio Assessment and Diagnosis Scheme to Support Web-based Scientific Inquiry Experiments
Authors

ABSTRACT
Promoting the development of students’ scientific inquiry capabilities is a major learning objective in science education. Teachers therefore require effective assessment approaches to evaluate students’ scientific inquiry performance, and must also be able to offer appropriate supplementary instruction to students as needed. Scientific inquiry capabilities should be assessed by evaluating students’ scientific inquiry portfolios in actual hands-on experiments. Although virtual laboratory systems can reduce the cost of conducting scientific inquiry experiments, manual portfolio assessment is still difficult and time-consuming for teachers. Therefore, in this paper, in order to provide students with personalized learning guidance concerning not only conceptual knowledge, but also the higher-order, integrative abilities of scientific inquiry, an Online Portfolio Assessment and Diagnosis Scheme, called OPASS, is proposed to assist teachers in automatically assessing and diagnosing students’ scientific inquiry performance. Personalized diagnostic reports are generated by a rule-based inference approach, which diagnoses learning problems and provides corresponding reasons and remedial suggestions based on teacher-defined assessment knowledge of the scientific inquiry experiment. Experimental results showed that OPASS was helpful and beneficial for both students and teachers.

INTRODUCTION

Today, Scientific Inquiry (SI)-based learning receives widespread attention. The purpose of such learning is to promote students’ knowledge and understanding of scientific ideas as well as how scientists study the natural world (National Research Council [NRC], 1996).
If students possess scientific inquiry skills, they are capable of conducting an investigation, collecting evidence from a variety of sources, developing an explanation from the data, and communicating and defending their conclusions (National Science Teacher Association [NSTA], 2004; Handelsman et al., 2004). Educators should teach students to learn and acquire not only conceptual knowledge, but also scientific inquiry skills. Consequently, assessment of scientific inquiry is necessary to foster the knowledge and skills of inquiry-based learning. In general, the traditional paper-and-pencil test is a suitable approach to measure students’ knowledge of science concepts and scientific inquiry, e.g., Substantive Knowledge. However, it is not easy to assess and evaluate learning problems and the performance of higher-order capabilities related to scientific inquiry, e.g., Procedural Knowledge, and Problem Solving and Integrative Abilities (Wenning, 2007; Jacobs-Sera, Hatfull, & Hanauer, 2009; Bennett, Persky, Weiss, & Jenkins, 2007, 2010). Furthermore, scientific inquiry can be considered a set of process skills consisting of questioning, hypothesis-making, experimenting, recording, analyzing, and concluding, which can be regarded as "hands-on" learning (NRC, 1996; NSTA, 2004; Ketelhut, Dede, & Clarke, 2010). Nevertheless, learning and assessing in the physical laboratory are inconvenient and time-consuming for both teachers and students (Hanauer, Hatfull, & Jacobs-Sera, 2009, pp. 117-118). With this in mind, a significant amount of research has been dedicated to developing virtual and Web-based interactive learning systems to support online scientific inquiry learning (Yaron et al., 2008; Hsu, Wu, & Hwang, 2008; Dalgarno, Bishop, Adlong, & Bedgood, 2009; Yaron, Karabinos, Davenport, Leinhardt, & Greeno, 2009; Yaron, Karabinos, Lange, Greeno, & Leinhardt, 2010; Ketelhut et al., 2010).
Through this type of learning, students can efficiently improve and foster their experiences and skills based on scientific inquiry learning activities, and their portfolios can be collected for further analysis.

TOJET: The Turkish Online Journal of Educational Technology – April 2011, Volume 10, Issue 2

Using students’ portfolios collected from inquiry-based learning activities to manually assess scientific inquiry is an ideal approach (Zachos, Hick, Doanne, & Sargent, 2000; Lunsford & Melear, 2004), but it is not easy to perform and is time-consuming for teachers (Zachos, 2004; Jacobs-Sera et al., 2009; Hanauer et al., 2009, pp. 117-118; Bennett et al., 2007, 2010). Moreover, many articles also argue that students should be provided with not only test scores, but also individual learning guidance for improving their learning performance. For this reason, several analysis and diagnosis approaches have been proposed to assess students’ learning portfolios and then offer them personalized learning guidance related to misconceptions of a given subject (Hwang, 2003; Kosba, Dimitrova, & Boyle, 2007; Chu, Hwang, & Huang, 2010; Panjaburee, Hwang, Triampo, & Shih, 2010) and scientific inquiry skill scores (Ting, Zadeh, & Chong, 2006; Ting, Phon-Amnuaisuk, & Chong, 2008; Bennett et al., 2007, 2010). Nevertheless, analyzing learning problems related to scientific inquiry skills requires diagnosing students’ operational and procedural portfolios, so that students can understand their learning status and problems in relation not only to scores and concepts, but also to the operations and skills of scientific inquiry.
Therefore, in this paper, to provide students with personalized learning guidance concerning not only conceptual knowledge, but also the higher-order, integrative abilities of scientific inquiry, an Online Portfolio Assessment and Diagnosis Scheme, called OPASS, has been proposed. The OPASS is able to efficiently evaluate students’ assessment portfolios collected from the Web-based scientific inquiry experiment. It employs a rule-based inference approach to automatically diagnose learning problems related to concepts, cause-and-effect operations, and skills of scientific inquiry according to teacher-defined assessment knowledge of the scientific inquiry experiment. Consequently, students can be provided with personalized scientific inquiry diagnostic reports to improve not only subject concepts, but scientific inquiry skills as well.

RELATED WORKS

Assessments of Scientific Inquiry

The knowledge and capabilities of scientific inquiry are multidimensional (NRC, 1996; Wenning, 2007; Hanauer et al., 2009, pp. 11-21) and can be divided into three types: (1) Substantive Knowledge, e.g., scientific concepts, facts, and processes; (2) Procedural Knowledge, e.g., procedural aspects of conducting a scientific inquiry; and (3) Problem Solving and Integrative Abilities, e.g., the ability to solve problems, pose solutions, conceptualize results, and reach conclusions (Jacobs-Sera et al., 2009, p. 36). Therefore, assessment of scientific inquiry is necessary to foster inquiry-based learning. Hence, in order to assess the scientific inquiry levels of students, Zachos et al. (2000) proposed critical “scientific inquiry capabilities” as assessment measures, whereby a series of structured performance tasks were designed to investigate students’ competence in conducting scientific inquiry.
Zachos (2004) then proposed that students’ responses to the structured performance tasks should be recorded and assessed based on scientific inquiry capabilities (Zachos et al., 2000), because the direct observation of performance is not feasible within educational systems and is time-consuming for both teachers and students (Hanauer et al., 2009, pp. 117-118). A paper-and-pencil, 35-item Scientific Inquiry Literacy Test (ScInqLiT), developed by Wenning (2007), is a diagnostic multiple-choice test of knowledge relevant to scientific inquiry, based on a defined form of scientific literacy. This test can be used to measure students’ scientific inquiry knowledge, and it is well suited to pre- and post-testing measures. However, Wenning (2007) also suggested that ScInqLiT should be regarded only as an indicator of students’ abilities, because procedural knowledge should be assessed by means of performance tests. Furthermore, based on the concept of avoiding direct assessment of students’ scientific inquiry process knowledge, Lunsford and Melear (2004) used the final products of scientific inquiry activity (e.g., portfolios, laboratory practices, and student demonstrations) to assess and infer students’ learning status and performance concerning scientific inquiry capabilities. To assess scientific inquiry performance, Hanauer et al. (2009, pp. 39-42) defined the characteristics of the Authentic Scientific Inquiry Assessment (ASIA) and proposed an active assessment development procedure, which consists of five stages: (1) Empirical Description of Scientific Inquiry; (2) Definition of Educational Aims; (3) Assessment Tool Development; (4) Scoring Rubric Development; and (5) Assessment Piloting.
Based on this development procedure, a case study referred to as the Phage Hunters Integrating Research and Education (PHIRE) program was proposed to address how specific assessment strategies and tools were constructed and implemented (Hanauer et al., 2009, pp. 55-113). The PHIRE program aims to introduce students to the scientific process, and to emphasize the involvement of students who have little scientific training but are curious about science and the natural world in which we live. Therefore, it was designed as a 10-step program, consisting of the following: (1) Phage Isolation; (2) Phage Purification; (3) Phage Amplification; (4) Electron Microscopy; (5) Nucleic Acid Extraction and Restriction Analysis; (6) DNA Sequencing; (7) Genome Annotation; (8) Comparison of the DNA Sequence to Known Genomes; (9) Comparative Genome Analysis; and (10) Publication. These steps are used to train and assess participating students. The PHIRE assessment strategy covers the formative, diagnostic, and summative aims of scientific inquiry education pertaining to the bacteriophage subject. The strategy includes five assessment tools to assess and evaluate the performance of students’ scientific inquiry skills: (1) the Substantive Knowledge Test; (2) the Physical Checklist; (3) the Visual Literacy Test; (4) the Notebook Assessment Tool; and (5) the Knowledge Presentation Performance Test. Each test consists of multiple-choice questions, open-ended questions, or observations. However, although the PHIRE program can offer students individual assessment and diagnostic reports of scientific inquiry, the practical issues of space, time, and money make it a significant problem to perform (Hanauer et al., 2009, pp. 117-118).
Conducting scientific inquiry assessment by means of inquiry-based learning activities tied to definitions of scientific inquiry capabilities appears to be an ideal approach (Zachos et al., 2000; Lunsford & Melear, 2004), but it is not easy to perform, and manually assessing the portfolio is time-consuming (Zachos, 2004; Hanauer et al., 2009, pp. 117-118). In addition, it can also be difficult to evaluate learning problems and the performance of higher-order capabilities related to scientific inquiry through the use of traditional paper-and-pencil tests (Wenning, 2007; Bennett et al., 2007, 2010; Jacobs-Sera et al., 2009).

Virtual and Web-Based Interactive Learning Environments

Scientific inquiry, as a set of process skills consisting of questioning, hypothesis-making, experimenting, recording, analyzing, and concluding, can be regarded as "hands-on" learning (NRC, 1996; NSTA, 2004; Ketelhut et al., 2010). Therefore, students need to experience and practice scientific inquiry-based activities in the physical laboratory in order to efficiently foster and acquire the skills of scientific inquiry. However, practicing in the physical laboratory is not convenient and is time-consuming for both teachers and students (Zachos, 2004; Jacobs-Sera et al., 2009; Hanauer et al., 2009, pp. 117-118; Bennett et al., 2007, 2010). A significant amount of research has been dedicated to the development of virtual and Web-based interactive learning systems to support online scientific inquiry learning. A virtual laboratory, called ChemCollective (2010; Yaron et al., 2008, 2009), was developed to allow students to design and carry out their own experiments. Yaron et al. (2010) then created activities that enable students to use their chemistry knowledge to practice and resolve problems. According to their results, homework using the virtual laboratory with real-world scenarios contributes significantly to learning.
In addition, the virtual laboratory can record all student interactions for further analysis. Dalgarno et al. (2009) also applied a 3D simulated virtual environment, called the Virtual Chemistry Laboratory, which can be used by distance university chemistry students for familiarization with the laboratory. Teaching students to efficiently learn and acquire scientific inquiry skills is not easy for teachers. Therefore, Ketelhut et al. (2010) proposed a novel pedagogy to infuse inquiry into a standards-based science curriculum by means of a Multi-User Virtual Environment (MUVE), called River City, in order to enhance students’ motivation and improve their overall learning performance of scientific inquiry. In this MUVE, students can make observations, pose questions, access information, gather and analyze data, plan investigations, propose answers and explanations, and communicate the results. The experimental results also show that students were able to conduct inquiries in the virtual worlds and were motivated by that process. To improve learning effectiveness, computer simulations, animations, and Web-based interactive content have also been used in many courses and curricula (Hameed, Hackling, & Garnett, 1993; Windschitl & Andre, 1998; Salajan et al., 2009). Hsu et al. (2008) proposed a Technology-Enhanced Learning (TEL) environment to support science learning related to the causes of the seasons, in which a Web-based interactive simulation tool was applied to support students’ explorations. Students can test and evaluate their hypotheses and learned concepts. Although the aforementioned virtual and Web-based interactive learning environments can enhance students’ motivation, foster students’ experiences, and improve students’ learning performance, the assessment and diagnosis of an individual student still need to be performed and manually analyzed by teachers according to the data collected within a given student’s portfolio.
Analysis and Diagnosis of the Learning Portfolio

To analyze learning portfolios, Chen, Liu, Ou, and Liu (2000; Chang, Chen, & Ou, 1998) applied decision tree and data cube techniques to analyze the learning behaviors of students and to discover pedagogical rules related to students’ learning performance from Web logs. These logs include the amount of article reading/posting, question-asking, logins, etc. According to their proposed approach, teachers can easily observe learning processes and analyze the learning behaviors of students for pedagogical needs. However, this approach cannot provide automatic analysis. In order to automatically diagnose learning problems, Hwang (2003) proposed a Concept-Effect Relationship (CER) model to represent prerequisite relationships among the concepts of a course. The model can be used to evaluate a student’s learning status and then provide that student with a diagnostic report that denotes not only the score, but also a description of any misconceptions. Afterwards, to address the problem that a concept might contain a hierarchical structure of knowledge with different degrees of difficulty, Chu et al. (2010) defined an Enhanced Concept-Effect Relationship (ECER) to assist teachers in identifying relationships among concepts and their multiple knowledge levels. They then proposed a learning diagnosis algorithm to analyze a student’s learning problems and offer personalized learning guidance. Based on the concept of the ECER model, a multi-expert approach has also been proposed to integrate the opinions of multiple experts in order to obtain high-quality relationships between test items and concepts in the ECER model (Panjaburee et al., 2010).
To address the problem of generating automatic feedback for teachers, a Teacher ADViser (TADV) system was developed (Kosba et al., 2007). TADV defines a knowledge model based on the concept map of a course in relation to the individual student, group, and class; a feedback generation algorithm using a fuzzy approach was then proposed to analyze students’ tracking data. Consequently, learning feedback, including conceptual learning performance and possible learning suggestions, is automatically generated for both the teacher and the student. The analytical and diagnostic approaches mentioned above (Hwang, 2003; Kosba et al., 2007; Chu et al., 2010; Panjaburee et al., 2010) are able to automatically analyze a student’s learning portfolio and generate individualized learning guidance and feedback for both teachers and students; however, only diagnosis concerning conceptual knowledge is taken into account. Considering the automatic assessment of scientific inquiry skills, Ting et al. (2008) proposed a Dynamic Decision Network (DDN) model in INQPRO, a scientific inquiry exploratory learning environment for learning physics (Ting et al., 2006), to assess students’ mastery of two temporal-variable scientific inquiry skills, i.e., Hypothesis Formulation and Variable Identification. The proposed DDN model can be generated dynamically by integrating various INQPRO Graphical User Interfaces (GUIs) in real time. In the INQPRO system, students are first required to make a hypothesis statement to elucidate their selected scenarios. Afterwards, students can actively interact with the GUIs, and an animated pedagogical agent gives them tailored suggestions and interventions according to assessment results consisting of three mastery levels (i.e., mastery, partial mastery, non-mastery) for the two scientific inquiry skills.
However, the tailored interventions only considered limited suggestions in terms of the three mastery levels and incorrect GUI operations for the two scientific inquiry skills. The various learning problems concerning conceptual knowledge, cause-and-effect operations, and skills of scientific inquiry, with corresponding reasons and remedial suggestions, were not taken into consideration. Additionally, to measure problem solving with technology, the National Assessment of Educational Progress (NAEP) Technology-Based Assessment Project developed a Technology-Rich Environment (TRE) in the domain of physical science surrounding helium gas balloons (Bennett et al., 2007, 2010). In the TRE search scenario, students needed to use a simulated World Wide Web environment to locate and synthesize information regarding scientific helium balloons. Students then answered one constructed-response question and four multiple-choice questions related to the uses and science of gas-balloon flight. In the TRE simulation scenario that followed, students could use an interactive simulation tool to experiment with solving problems about the relationships among buoyancy, mass, and volume. The TRE employed Evidence-Centered Design (ECD) (Mislevy, Almond, & Lukas, 2003) to develop an interpretive framework consisting of student and evidence models for translating the multiplicity of actions collected from each student into inferences. The student model represented a set of hypotheses about the components of proficiency in a domain and thus defined two primary assessment skills: scientific inquiry and computer skills.
The evidence model showed how relevant student actions were connected to those assessment skills; evidence was captured by computer and consisted of student actions called “observables.” The TRE used scoring criteria called “evaluation rules” to assess the accuracy of observables, and used a modeling procedure based on Bayesian networks (Mislevy, Almond, Yan, & Steinberg, 2000) to create summary scores of skills. Therefore, by means of the TRE assessment process, students’ problem solving capabilities can be assessed and scored. Nevertheless, data collected in the assessment portfolio still need to be manually evaluated by reviewers, and detailed diagnoses concerning skill problems must be further developed. The aforementioned research and systems either provide students with limited diagnostic feedback, e.g., conceptual knowledge and summary skill-level scores (Ting et al., 2008), or require manual assessment (Hanauer et al., 2009; Bennett et al., 2007, 2010). However, the analysis of learning problems regarding scientific inquiry capabilities needs to diagnose students’ operational and procedural portfolios. With this need in mind, our main concern is how to propose a novel, online, automatic assessment and diagnosis scheme to efficiently provide students with descriptive diagnostic feedback, corresponding explanations, and remedial suggestions to correct learning problems concerning conceptual knowledge, cause-and-effect operations, and skills of scientific inquiry.

ONLINE PORTFOLIO ASSESSMENT AND DIAGNOSIS SCHEME

Problem Description

As stated previously, Scientific Inquiry (SI), as a set of process skills consisting of questioning, hypothesis-making, experimenting, recording, analyzing, and concluding, can be regarded as "hands-on" learning (NRC, 1996; NSTA, 2004; Ketelhut et al., 2010).
Although virtual and Web-based interactive learning systems can be used to enhance the learning performance of scientific inquiry (Yaron et al., 2008, 2009, 2010; Hsu et al., 2008; Dalgarno et al., 2009; Ketelhut et al., 2010), manually assessing scientific inquiry competencies according to the students’ portfolios collected from inquiry-based learning activities is still difficult and time-consuming for teachers (Zachos, 2004; Hanauer et al., 2009, pp. 117-118; Bennett et al., 2007, 2010). Moreover, limited diagnostic feedback, e.g., conceptual knowledge and summary scores of skills (Ting et al., 2008), does not allow students to thoroughly understand their learning problems in terms of scientific inquiry. Therefore, in this paper, to provide students with personalized learning guidance concerning not only conceptual knowledge, but also higher-order knowledge of scientific inquiry capabilities (Wenning, 2007; Jacobs-Sera et al., 2009), pressing issues remain about how to efficiently analyze students’ assessment portfolios and automatically offer them individual diagnostic reports related to concepts, cause-and-effect operations, skills of scientific inquiry, and related remedial suggestions. The following three issues must be solved: (1) How to model and define useful and meaningful assessment knowledge, which can be defined by teachers or domain experts, to correctly represent conceptual and evaluation knowledge for the assessment of a scientific inquiry experiment. (2) How to efficiently analyze learning problems according to the assessment portfolio collected by a Web-based scientific inquiry experiment, based on the teacher-defined assessment knowledge.
(3) How to generate a personalized diagnostic report concerning any learning problems related to the concepts, cause-and-effect operations, and scientific inquiry skills, in order to improve the overall understanding of scientific inquiry.

Framework of the Online Portfolio Assessment and Diagnosis Scheme

According to the issues mentioned in previous sections, an Online Portfolio Assessment and Diagnosis Scheme, called OPASS, has been proposed. The OPASS framework is shown in Figure 1. This scheme employs a rule-based inference approach to efficiently and automatically evaluate students’ assessment portfolios of a Web-based scientific inquiry experiment and then diagnose learning problems concerning concepts, cause-and-effect operations, and scientific inquiry skills according to the teacher-defined assessment knowledge. It can further provide students with personalized scientific inquiry diagnostic reports to improve not only subject concepts, but also scientific inquiry capabilities.

Figure 1: Framework of the OPASS

The OPASS includes two phases, described as follows: 1. Assessment Knowledge Definition of Scientific Inquiry Experiment Phase: In order to correctly assess a student’s portfolio of the Web-based scientific inquiry experiment, the assessment knowledge, consisting of (1) Experiment Knowledge and (2) Evaluation Knowledge, must be defined in advance by the teacher, as shown at the top of Figure 1. Experiment knowledge, defined as the knowledge structure, includes the concept map of a subject and the skill map of scientific inquiry, and is used to represent the required concepts and skills that students need to understand and acquire in the scientific inquiry experiment.
Therefore, to assess students’ capabilities in terms of concepts and skills, the steps of experiment planning and the actions of the operation experiment in the assessment procedure of the Web-based scientific inquiry experiment can be associated with experiment knowledge. Moreover, to check the accuracy of students’ assessment portfolios, evaluation knowledge, including the Key Operation Action Pattern (KOAP) and the Assessment Rule (AR), must also be defined. The KOAP is proposed to define the key operational actions and sequences which influence the correctness of operational data in the operation experiment. Hence, based on the KOAP and the experiment knowledge, the assessment rule is proposed to evaluate the accuracy of students’ assessment portfolios and to further identify problems related to scientific inquiry. 2. Online Assessment Portfolio Diagnosis Process Phase: In order to efficiently provide students with personalized scientific inquiry diagnostic reports, this phase, which consists of three modules, has been proposed to automatically evaluate and diagnose learning problems according to students’ assessment portfolios, and then to generate personalized diagnostic reports. The three modules include the following:
• Evaluation Process: uses the teacher-defined Assessment Rule (AR) to evaluate the correctness of the student’s assessment portfolio of the Web-based scientific inquiry experiment.
• Diagnosis Process: diagnoses learning problems of concepts, cause-and-effect operations, and skills of scientific inquiry by means of the proposed Diagnostic Rule (DR).
• Diagnostic Report Generation: generates the personalized scientific inquiry diagnostic report with descriptions, corresponding reasons, and remedial suggestions of learning problems based on the defined Description Format.
Details of each phase will be described in the following sections.
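As a rough illustration, the three modules of the online diagnosis phase can be viewed as a simple pipeline: evaluate the portfolio against assessment rules, map failures to diagnostic knowledge, and render a report. The following Python sketch is only our illustration of that flow under stated assumptions; the names (DiagnosticEntry, evaluate_portfolio, etc.) and data shapes are hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class DiagnosticEntry:
    problem: str      # Problem Description
    reason: str       # corresponding Reason
    suggestion: str   # remedial Suggestion

def evaluate_portfolio(portfolio, assessment_rules):
    """Evaluation Process: apply teacher-defined Assessment Rules (AR).
    Each rule is a (condition, assessment_function) pair; the assessment
    function runs only when its condition holds, and a failure is recorded
    when the function returns False."""
    failures = []
    for condition, assess in assessment_rules:
        if condition(portfolio) and not assess(portfolio):
            failures.append(assess.__name__)
    return failures

def diagnose(failures, diagnostic_knowledge):
    """Diagnosis Process: map each failed assessment to its Diagnostic
    Knowledge (Problem Description, Reason, Suggestion)."""
    return [diagnostic_knowledge[f] for f in failures if f in diagnostic_knowledge]

def generate_report(entries):
    """Diagnostic Report Generation: render entries with a simple
    Description Format."""
    return "\n\n".join(
        f"Problem: {e.problem}\nReason: {e.reason}\nSuggestion: {e.suggestion}"
        for e in entries
    )
```

In this sketch the Diagnostic Rule (DR) is reduced to a dictionary lookup keyed by the failed assessment function; the actual scheme associates each assessment function with its diagnostic knowledge as described above.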
Assessment Knowledge Definition of Scientific Inquiry Experiment Phase

As mentioned above, in order to automatically assess and diagnose students’ scientific inquiry learning problems according to their assessment portfolios of a Web-based scientific inquiry experiment, the assessment knowledge of the scientific inquiry experiment must be predefined by the teacher (Matthews, Pharr, Biswas, & Neelakandan, 2000; Hwang, 2003; Chu et al., 2010; Panjaburee et al., 2010). Therefore, in the OPASS, the assessment knowledge, which consists of experiment knowledge and evaluation knowledge, has been proposed. Figure 2 shows the relation model of assessment knowledge in the OPASS. In the OPASS, each Experiment Step of the assessment procedure in the Web-based scientific inquiry experiment and each definition of the Key Operation Action Pattern (KOAP) can be associated with concepts and skills of the teacher-defined knowledge structure. Based on this relational definition, the teacher-defined Assessment Rule (AR), represented in the IF (Condition Setting) THEN (Assessment Function) rule format, is able to evaluate the assessment portfolios and diagnose problems concerning concepts, cause-and-effect operations, and scientific inquiry skills, where the Assessment Function uses the Problem definition to check whether or not students have a problem at the corresponding experiment step. Each assessment function is also associated with corresponding Diagnostic Knowledge, including Problem Description, Reason, and Suggestion, which is further used to generate the diagnostic reports by means of the proposed Diagnostic Rule (DR). Each definition of the Assessment Knowledge (AK) is described in the following subsections.
Figure 2: Relation Model of the Assessment Knowledge in the OPASS

Definitions of the Experiment Knowledge

In order to assess students’ experimental portfolios, the experiment knowledge related to the scientific inquiry experiment needs to be defined in advance. Therefore, in the OPASS, two kinds of knowledge structures have to be defined by the teacher: the concept map of a subject and the skill map of scientific inquiry. The former denotes the necessary concepts that students need to learn and understand, and the latter denotes the required skills students need to be equipped with in this assessment experiment. The concept map and the skill map used in the OPASS are defined as follows.

Definition of the Concept Map (CM): CM = (C, R), where:
• C = {c1, c2, ..., cn}: ci represents a main concept in a subject.
• R = {cr1, cr2, ..., crm}: cri represents the Relation Type between two concepts in a CM, where the Relation Type is defined as either APO: ci is A Part Of cj, or PR: ci is a Prerequisite of ck.

Here, the CM, consisting of a set of concepts (ci) with two types of relations, i.e., A-Part-Of relations (APO) and Prerequisite Relations (PR), is a hierarchical structure of the concepts of a subject. By means of these relational definitions among concepts, a student’s learning problems related to subject concepts can be found and diagnosed. Figure 3 depicts an example of a partial CM of a Biology Transpiration Experiment, where the concept Phenomenon has three sub-concepts, Transpiration, Photosynthesis, and Capillarity, and the prerequisite concepts of Transpiration are Water Transportation and Capillarity.
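For concreteness, the CM = (C, R) definition and the partial transpiration map described above can be encoded directly. This is only an illustrative Python sketch, not part of the OPASS system; the triple representation and the helper function are our own assumptions.

```python
# Illustrative encoding of CM = (C, R) for the partial Biology
# Transpiration Experiment map, where "APO" = A-Part-Of and
# "PR" = Prerequisite Relation.
concepts = {"Phenomenon", "Transpiration", "Photosynthesis",
            "Capillarity", "Water Transportation"}

# Each relation is a (ci, relation_type, cj) triple.
relations = [
    ("Transpiration", "APO", "Phenomenon"),
    ("Photosynthesis", "APO", "Phenomenon"),
    ("Capillarity", "APO", "Phenomenon"),
    ("Water Transportation", "PR", "Transpiration"),
    ("Capillarity", "PR", "Transpiration"),
]

def prerequisites(concept, relations):
    """Concepts a student must understand before the given concept;
    a misconception can then be traced back along PR relations."""
    return {src for (src, rel, dst) in relations if rel == "PR" and dst == concept}
```

With this encoding, a diagnosis of a Transpiration misconception would point the student back to Water Transportation and Capillarity as the prerequisite concepts to review.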
Figure 3: Example of a Partial CM of the Biology Transpiration Experiment

Definition of the Skill Map (SM) for Scientific Inquiry: SM = (S, R), where:
• S = {s1, s2, ..., sn}: si represents a Skill among the Scientific Inquiry Skills.
• R = {sr1, sr2, ..., srm}: sri represents the Relation Type between two skills in an SM, where the Relation Type is defined as either APO: si is A Part Of sj, or D: si is Dependent on sk.

The structure of the SM for scientific inquiry is the same as that of the CM, except for the cross-link relation definitions, Dependence Relations (D), which represent cause-and-effect relations between two skills. For example, Figure 4 illustrates an example of a partial SM for the scientific process, where the skill Setting Variables depends on the skill Making Hypothesis.

Figure 4: Example of a Partial Scientific Process Skill Map for the Scientific Inquiry Experiment

Definitions of the Evaluation Knowledge

Definitions of the Key Operation Action Patterns:

During the Web-based scientific inquiry experiment, students will be asked to operate the Web-based operation experiment tool, which emulates the actual experiment operation, and their behavior will be collected and regarded as the Operational Data of the scientific inquiry assessment portfolio. However, an important problem is how to automatically assess and evaluate the operational data of students. Therefore, in the OPASS, the Key Operation Action Pattern (KOAP) has been proposed to evaluate the accuracy of students’ operational data. The KOAP defines the key operational actions and sequences which influence the operational accuracy of the Web-based operation experiment tool. Accordingly, the teacher can define the necessary KOAPs to observe and evaluate students’ operational data.
The definitions related to the Experiment Operations (EO) and the KOAP in terms of the Web-based operation experiment are as follows:

Definition of the EO: EO={a1, a2,..., an}: denotes all actions that a student can perform with the Web-based operation experiment tool in the scientific inquiry assessment experiment.

Definition of the KOAP: KOAP=(KA, AC, AS, OC), where:
• KA = {ai, aj,..., am | 0 < |KA| ≤ n}: denotes the Key Actions (KA), a subset of the n actions of EO; each action (ai) in KA plays an important role among all operational actions in EO, and its accuracy influences the accuracy of the whole EO.
• AC = (ai, ai+1, ai+2,...): denotes the Action Continuity (AC), an action sequence whose actions must be performed consecutively.
• AS = (ai, aj,..., ak | i < j < k): denotes the Action Sequence (AS), an action sequence whose order must be preserved but whose continuity is not required.
• OC = (ai, ai+1, ai+2,...): denotes the Object Continuity (OC), a continuous action sequence applied to a targeted object.

Therefore, according to the definition of the KOAP, the accuracy of a student's operational portfolio from the Web-based operation experiment tool can be automatically assessed, analyzed, and diagnosed. Table 1 illustrates each KOAP with a description.

Table 1: Illustration with the Description of each KOAP
Type | Illustration Description
Key Action (KA) | [Filling] the [Red Water] from a [cup without scale] into the [Beaker with Scale] is a Key Action (KA).
Action Continuity (AC) | In order to put out the fire correctly, [Action 1] must be followed immediately by [Action 2]; no other actions may be performed between them.
Action Sequence (AS) | AS=(a1, a2, a5, a8) is a correct operational action sequence for finishing the operation experiment, where [Action 2] must be done before [Action 5], but other actions may be performed between Action 2 and Action 5.
Object Continuity (OC) | For the targeted object, Celery, it must be [Cut] only immediately after being [Dipped into water].
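To make the pattern types concrete, the following is a minimal, hypothetical sketch of checking one KOAP type, Object Continuity (OC), against a student's action log; the log format and function names are illustrative, not part of the OPASS specification:

```python
# Hypothetical sketch of an Object Continuity (OC) check: for a targeted
# object, its actions must form one contiguous run in the log that matches
# the teacher-defined sequence exactly.
def check_object_continuity(log, target, oc):
    """log: list of (action, object) pairs; oc: required contiguous action
    sequence for `target`. Returns (True, None) or (False, wrong_pattern)."""
    observed = [a for (a, obj) in log if obj == target]
    if observed == list(oc):
        # Also require contiguity in the full log: no foreign actions between.
        idx = [i for i, (_, obj) in enumerate(log) if obj == target]
        if idx == list(range(idx[0], idx[0] + len(idx))):
            return True, None
    return False, observed

log = [("dip in water", "celery"), ("cut root", "celery"),
       ("fill", "beaker")]  # student stopped after two of four actions
ok, wrong = check_object_continuity(
    log, "celery", ["dip in water", "cut root", "put into tank", "waiting"])
print(ok, wrong)  # -> False ['dip in water', 'cut root']
```

The returned `wrong` pattern plays the role of the WrongPatternn argument described for ObjectContinuity_Error below.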
It is regarded as an incorrect operation if other actions occur between them.

Definitions of the Assessment Rules (AR): In the OPASS, the rule-based inference approach is applied to infer the accuracy of the assessment experiment according to a student's assessment portfolio. Therefore, the teacher can define assessment rules in advance to evaluate the accuracy of a student's answer and to identify learning problems related to subject concepts, cause-and-effect operations, and skills of scientific inquiry. An assessment rule is defined as follows.

Definition of the AR: AR={Ar1, Ar2,..., Arn}, where:
• Ari = If (Condition Setting) Then (Assessment Function): each Ari of the AR is represented in the IF-THEN rule format, where Condition Setting = {Cs1, Cs2,..., Csm}: each Csi of the Condition Setting can be used to evaluate the accuracy of the student's answer in terms of the assessment portfolio, consisting of the planning data and operational data defined in Section: Definitions of the Assessment Portfolio. If the result of the Condition Setting is true, the Assessment Function is triggered to evaluate the student's assessment portfolio.

In the OPASS, the Predicate Function (Giarratano & Riley, 2004) is used as the function in the AR. A predicate function is any function that returns a Boolean value, TRUE or FALSE; any value other than FALSE is treated as TRUE. The Assessment Function used in the AR is defined as follows.
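The IF-THEN rule format with predicate-function conditions can be sketched as follows; this is an illustrative reading of the AR definition, not the OPASS implementation, and the rule shown is a made-up example:

```python
# Hypothetical sketch of the AR format: each rule pairs a condition
# (predicate functions over the portfolio) with an assessment function
# that fires when the condition holds. Any non-False value counts as TRUE.
def make_rule(condition, assessment):
    def rule(portfolio, findings):
        result = condition(portfolio)
        if result is not False:          # any value other than FALSE is TRUE
            findings.append(assessment(portfolio))
    return rule

def run_rules(rules, portfolio):
    findings = []
    for rule in rules:
        rule(portfolio, findings)
    return findings

# Illustrative rule: flag a wrong step if the hypothesis names no object.
rules = [make_rule(
    condition=lambda p: p.get("Hypothesis-IF-Object") is None,
    assessment=lambda p: ("WrongStep", "Making Hypothesis", "missing object"))]
print(run_rules(rules, {}))
# -> [('WrongStep', 'Making Hypothesis', 'missing object')]
```

Treating "anything other than FALSE" as true mirrors how predicate functions behave in classic rule-based systems such as CLIPS, the system described by Giarratano and Riley.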
Definitions of the Assessment Function in the AR:

WrongStep(Stepi, Problemi): checks whether the experiment Stepi of the assessment procedure was executed correctly during the Web-based scientific inquiry experiment, where:
• Stepi: the name of an experiment step in the scientific inquiry assessment experiment.
• Problemi: denotes a checking predicate function, which checks whether a student made this kind of error at the executed experiment Stepi. Each Problemi has its corresponding checking predicate function definition, which can be extended and defined by the teacher according to the requirements of the assessment, such as:

ObjectContinuity_Error(objk, ActionSequencem, WrongPatternn): checks the accuracy of the continuity of the object (objk) defined in the KOAP by comparing the correct Object Continuity (OC) definition (ActionSequencem) with the student-made action pattern, which is regarded as WrongPatternn if it is not the correct experimental operation.

IndependentVariable_Error(objk, IF-Statementn, Then-Statementn): checks the accuracy of the independent variable of the object (objk) according to the hypothesis setting (IF-Statementn and Then-Statementn), defined in Section: Definitions of the Assessment Portfolio, that the student made.

Example 1: If a student dipped a stalk of celery into water and then used a knife to cut its root during the virtual operation experiment, the accuracy of this experimental operation can be checked by the defined Assessment Function, WrongStep("Action Operation", ObjectContinuity_Error([celery], {[dip in water], [cut root], [put into tank], [waiting]}, {[dip in water], [cut root]})). The student's operational actions, i.e., {[dip in water], [cut root]}, are incorrect because the correct Object Continuity (OC) definition of the Key Operation Action Patterns (KOAP) was defined as [dip in water] [cut root] [put into tank] [waiting].
Moreover, the accuracy of the hypothesis setting can also be checked by WrongStep("Operational Experiment", IndependentVariable_Error([celery], [cross section area of celery stem], [the decreasing quantity of the red water])).

Condition Setting Function in the AR: In addition to the Assessment Function, the Condition Setting of the AR can also use predicate functions to check the conditions of a rule. Therefore, in the OPASS, the Condition Setting = {Cs1, Cs2,..., Csm}, where, for instance:
• Csi = NotMatch(ObjectContinuity(targeted object, correct OC definition)): evaluates the student's operational actions on the targeted object against the correct OC definition, or
• Csj = (TargetObject(obj) & IndependentVariable(X) & CorrectIndependentVariable(Y) & (X≠Y)): evaluates the independent variable that the student actually set (X) against the correct independent variable (Y) in terms of the targeted object (obj); the condition is true if (X≠Y) is true.

Example 2: Assume there are
Ar1 = If ( NotMatch( ObjectContinuity([celery], {[dip in water], [cut root], [put into tank], [waiting]}) ) ) Then WrongStep("Action Operation", ObjectContinuity_Error([celery], {[dip in water], [cut root], [put into tank], [waiting]}, {[dip in water], [cut root]})), and
Ar2 = If ( TargetObject([celery]) & IndependentVariable([length of stem]) & CorrectIndependentVariable([amount of leaves]) & ([length of stem]≠[amount of leaves]) ) Then WrongStep("Operational Experiment", IndependentVariable_Error([celery], [cross section area of celery stem], [the decreasing quantity of the red water])).
Therefore, the Assessment Function, WrongStep(), is triggered if the Condition Setting of Ar1 or Ar2 is true.
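An Ar2-style condition, comparing the student's independent variable with the correct one, can be sketched as below; the portfolio layout and function name are illustrative assumptions, not the system's actual interface:

```python
# Hypothetical sketch of the Csj condition from Example 2: compare the
# independent variable the student set (X) with the correct one (Y) for
# the target object, and report an IndependentVariable_Error when X != Y.
def independent_variable_rule(portfolio, target, correct_iv):
    student_iv = portfolio.get((target, "independent_variable"))
    if student_iv is not None and student_iv != correct_iv:   # (X != Y)
        return ("WrongStep", "Operational Experiment",
                ("IndependentVariable_Error", target, student_iv, correct_iv))
    return None

portfolio = {("celery", "independent_variable"): "length of stem"}
print(independent_variable_rule(portfolio, "celery", "amount of leaves"))
```

Here the returned tuple stands in for triggering the WrongStep() Assessment Function; a matching variable setting yields no finding.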
Definitions of the Assessment Portfolio
As seen in Figure 1, the assessment portfolio of scientific inquiry consists of planning data and operational data. Before the assessment process, the log of the Web-based experiment system must be transformed into the format defined in the OPASS. Logs of planning data, as shown in Table 2, are sets of attribute-value pairs. For example, in an experiment on biology transpiration, a student defined the hypothesis: If the [celery]'s [leaves] are [more], the [decreasing quantity] of the [red water] is [more]. The logs then recorded six attributes, covering the objects, attributes, and their values in the condition and effect parts of the hypothesis.

Table 2: Example Logs of Planning Data
Attribute | Value
Hypothesis-IF-Object | Celery
Hypothesis-IF-Attribute | Leaves
Hypothesis-IF-Value | More
Hypothesis-THEN-Object | Red water
Hypothesis-THEN-Attribute | Decreasing quantity
Hypothesis-THEN-Value | More

Logs of operational data, as shown in Table 3, are a sequence of operations, each of which consists of an action name, a used object, a target object, and a set of environmental attribute-value pairs. For example, the action sequence in Table 3 describes that a student [filled] a [beaker with scale] with [red water]. Then, the student [dipped] a [head of celery] into a [tank] and used a [knife] to [cut] the [stem of the celery]. Afterward, the student [put] the [celery] into the [beaker with scale] and [waited].
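The transformation of the flat attribute-value log of Table 2 into an IF/THEN hypothesis structure can be sketched as follows; the attribute naming follows Table 2, while the function name and output shape are illustrative:

```python
# Hypothetical sketch: parse Table 2's flat attribute-value log into the
# IF (condition) and THEN (effect) parts of the student's hypothesis.
def parse_hypothesis(log):
    hyp = {"IF": {}, "THEN": {}}
    for attr, value in log.items():
        _, part, field = attr.split("-")       # e.g. "Hypothesis-IF-Object"
        hyp[part][field] = value
    return hyp

log = {"Hypothesis-IF-Object": "Celery", "Hypothesis-IF-Attribute": "Leaves",
       "Hypothesis-IF-Value": "More", "Hypothesis-THEN-Object": "Red water",
       "Hypothesis-THEN-Attribute": "Decreasing quantity",
       "Hypothesis-THEN-Value": "More"}
print(parse_hypothesis(log)["IF"]["Object"])  # -> Celery
```

Once in this form, the planning data can feed directly into checks such as IndependentVariable_Error described above.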
Table 3: Example Logs of Operational Data
Action | Used Object | Target Object | Environmental Status
Fill | Red water | Beaker with scale | Temperature: 25°C, Light: Yes, Humidity: 60%
Dip | Celery | Tank | Temperature: 25°C, Light: Yes, Humidity: 60%
Cut | Knife | Celery | Temperature: 25°C, Light: Yes, Humidity: 60%
Put | Celery | Beaker with scale | Temperature: 25°C, Light: Yes, Humidity: 60%
Wait | – | – | Temperature: 25°C, Light: Yes, Humidity: 60%

Online Assessment Portfolio Diagnosis Process Phase
By means of the teacher-defined assessment knowledge related to the scientific inquiry experiment described in the previous section, the student's assessment portfolio can be automatically evaluated and diagnosed by the Online Assessment Portfolio Diagnosis Process (OAPDP) in phase 2 of the OPASS. The details are described in this section.

Procedure of the Online Assessment Portfolio Diagnosis Process
Figure 5 shows the flowchart of the OAPDP, which consists of three modules: (1) the Evaluation Process; (2) the Diagnosis Process; and (3) Diagnostic Report Generation. In the Evaluation Process, the OAPDP uses the teacher-defined Assessment Rules (AR) to evaluate the accuracy of the student's scientific inquiry assessment portfolio and then identifies the Wrong Experiment Steps according to the inference results of the Rule Inference Process. Afterwards, in the Diagnosis Process, the OAPDP first diagnoses the mis-concept/skill, with the corresponding reason, for each wrong experiment step by means of the Diagnosis Rules (DR), based on the relation model of assessment knowledge shown in Figure 2. The OAPDP further analyzes the Remedial Path according to the relational definitions of the experiment knowledge, i.e., the Prerequisite (PR) relations in the CM and the Dependence (D) relations in the SM of scientific inquiry. Consequently, the major mis-concept/skill with the corresponding wrong experiment step can be discovered.
Finally, the Diagnostic Report Generation module generates a personalized scientific inquiry diagnostic report, consisting of descriptions, corresponding reasons, and related remedial suggestions for correcting learning problems, based on the defined Description Format.

Figure 5: Flowchart of the OAPDP

Diagnosis Process in the OAPDP
As mentioned above, the Diagnosis Process module in the OAPDP uses the Diagnosis Rules (DR), based on the relation model of assessment knowledge, to diagnose the mis-concept/skill with the corresponding reason for each wrong experiment step. In the OPASS, the DR are defined as follows.

Definition of the DR: DR={Dr1, Dr2,..., Drn}, where Dri = If (Condition Setting) Then (Diagnostic Function): each Dri of the DR is represented in the IF-THEN rule format. Three types of DRs are defined as follows:

(1) DRs of the Mis-Concept, Mis-Skill, and Reason:
If (WrongStep($S, $P) & StepConceptRelation(WrongStep($S, $P), $Concept)) Then MisConcept($Concept): diagnoses the mis-concept (MisConcept()) according to the relationship between the wrong experiment step (WrongStep()) and the associated concept via the function StepConceptRelation(). The $S and $P denote the Stepi and Problemi of the Assessment Function, WrongStep(), in the AR.
If (WrongStep($S, $P) & StepSkillRelation(WrongStep($S, $P), $Skill)) Then MisSkill($Skill): diagnoses the mis-skill according to the relationship between the wrong experiment step and the associated skill of scientific inquiry via the function StepSkillRelation().
If (WrongStep($S, $P) & StepReasonRelation(WrongStep($S, $P), $Type, $Desc)) Then Reason($Type, $Desc): diagnoses the corresponding reason for the identified mis-concept or mis-skill according to the relationship between the wrong experiment step and the associated reason, where $Type is "Concept" or "Skill," each of which has a corresponding description ($Desc) explaining the reason for the problem the student made at the wrong experiment step.

(2) DRs of the Major Wrong Step of the Assessment Experiment:
If (MajorMisSkill($Skill) & WrongStep($S, $P) & StepSkillRelation(WrongStep($S, $P), $Skill)) Then MajorWrongStep($S, $P): diagnoses the major wrong experiment steps of a student according to the relationship between the wrong experiment step and the major mis-skill.

(3) DRs of the Remedial Concept and Skill for the Mis-Concept and Mis-Skill:
If (MajorMisConcept($Cx) & Prerequisite($Cy, $Cx)) Then PRConcept($Cy): diagnoses the remedial concept for the student's mis-concept according to the prerequisite concept relationship (Prerequisite()) of the major mis-concept.
If (MajorMisSkill($Sx) & Prerequisite($Sy, $Sx)) Then PRSkill($Sy): diagnoses the remedial skill for the student's mis-skill according to the prerequisite skill relationship (Prerequisite()) of the major mis-skill.

Table 4 lists examples of the DR definitions, and Table 5 presents examples of the Assessment Function definition, WrongStep($S, $P), associated with the Problem Description, the Reason, and the Suggestion Description. Learning problems related to concepts, cause-and-effect operations, and skills of scientific inquiry can thus be analyzed and diagnosed by means of the proposed DR.
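The type (3) rules amount to walking the Prerequisite relations backwards from a major mis-concept; a minimal, hypothetical sketch (the relation data is adapted from the Figure 3 example, and the function name is illustrative):

```python
# Hypothetical sketch of DR type (3): collect the direct and transitive
# prerequisite concepts of a major mis-concept as a remedial path.
def remedial_path(prereq, mis_concept):
    """prereq: {concept: [its prerequisite concepts]}. Returns the concepts
    the student should revisit before re-learning `mis_concept`."""
    path, stack = [], [mis_concept]
    while stack:
        c = stack.pop()
        for p in prereq.get(c, []):
            if p not in path:
                path.append(p)
                stack.append(p)
    return path

prereq = {"Transpiration": ["Water Transportation", "Capillarity"],
          "Water Transportation": ["Capillarity"]}
print(remedial_path(prereq, "Transpiration"))
# -> ['Water Transportation', 'Capillarity']
```

The same traversal over the Dependence (D) relations of the SM would yield remedial skills for a major mis-skill.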
Table 4: Examples of the DR Definitions

Symbol definitions:
$S1 = "Operational Experiment"
$S2 = "Action Operation"
$P1 = IndependentVariable_Error([celery], [cross section area of celery stem], [the decreasing quantity of the red water])
$P2 = ObjectContinuity_Error([celery], {[dip in water], [cut root], [put into tank], [waiting]}, {[dip in water], [cut root]})

Type | IF (Condition Setting) | THEN
Dr1 | WrongStep($S1, $P1) & StepConceptRelation(WrongStep($S1, $P1), "Transpiration") | MisConcept("Transpiration")
Dr2 | WrongStep($S2, $P2) & StepConceptRelation(WrongStep($S2, $P2), "Transpiration") | MisConcept("Transpiration")
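Putting the pieces together, the three OAPDP modules can be sketched as a small pipeline; this is an illustrative skeleton under assumed data shapes, not the OPASS implementation:

```python
# Hypothetical skeleton of the OAPDP: the Evaluation Process finds wrong
# steps via assessment rules, the Diagnosis Process maps them to associated
# concepts, and Diagnostic Report Generation renders the findings.
def evaluate(portfolio, assessment_rules):
    """Apply each rule; a rule returns a wrong-step name or None."""
    return [r(portfolio) for r in assessment_rules if r(portfolio)]

def diagnose(wrong_steps, step_concept_relation):
    """Map each wrong step to its associated (mis-)concept, as in DR type 1."""
    return [(step, step_concept_relation[step])
            for step in wrong_steps if step in step_concept_relation]

def generate_report(diagnoses):
    return [f"Wrong step '{s}': review concept '{c}'" for s, c in diagnoses]

# Illustrative run: one rule flags the action-operation step when the
# object-continuity check failed.
rules = [lambda p: "Action Operation" if not p.get("oc_ok") else None]
wrong = evaluate({"oc_ok": False}, rules)
report = generate_report(diagnose(wrong, {"Action Operation": "Transpiration"}))
print(report)
# -> ["Wrong step 'Action Operation': review concept 'Transpiration'"]
```

A full system would replace the dictionary lookups with the teacher-defined AR, DR, CM, and SM described above; the control flow, however, follows Figure 5's three-module structure.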